# Dynamic masking training
| Model | Maintainer | Category | Description | Downloads | Likes |
|---|---|---|---|---|---|
| Roberta Base Mr | flax-community | Large Language Model | A transformers model pre-trained on a large-scale Marathi corpus with self-supervised masked language modeling, intended for downstream task fine-tuning. | 156 | 1 |
| Roberta Small Bulgarian | iarfmoose | Large Language Model, Other | A streamlined version of the Bulgarian RoBERTa model with only 6 hidden layers, maintaining comparable performance. | 21 | 0 |
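
Both entries are RoBERTa variants pre-trained with masked language modeling, and RoBERTa-style training uses dynamic masking: the masked positions are re-sampled for every batch rather than fixed once during preprocessing. The snippet below is a minimal sketch of that behavior using Hugging Face's `DataCollatorForLanguageModeling`; the `roberta-base` checkpoint and the sample text are placeholders for illustration, not details taken from the listings above.

```python
# Minimal sketch of dynamic masking for masked language modeling,
# assuming the Hugging Face transformers library is installed.
# The checkpoint and sample text below are placeholders.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# DataCollatorForLanguageModeling re-samples which tokens are masked every
# time a batch is assembled, so each epoch sees a different masking pattern
# (dynamic masking), unlike a corpus that is masked once up front.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,  # RoBERTa's default masking rate
)

texts = ["Dynamic masking picks new tokens to mask at every pass."]
encodings = [tokenizer(t, truncation=True, max_length=128) for t in texts]

# Collating the same example twice generally yields different masked
# positions, which is exactly the point of dynamic masking.
batch_a = collator(encodings)
batch_b = collator(encodings)
print(batch_a["input_ids"])
print(batch_b["input_ids"])
```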